Evaluating Point Cloud Quality via Transformational Complexity
Full-reference point cloud quality assessment (FR-PCQA) aims to infer the quality of distorted point clouds when their references are available. Combining findings from cognitive science with intuitions about the human visual system (HVS), the difference between the expected perceptual result and the actual perceptual reproduction in the visual center of the cerebral cortex indicates the subjective quality degradation. Therefore, in this paper, we derive point cloud quality by measuring the complexity of transforming the distorted point cloud back to its reference, which in practice can be approximated by the code length of one point cloud when the other is given. For
this purpose, we first segment the reference and the distorted point cloud into a series of local patch pairs based on a 3D Voronoi diagram. Next, motivated by predictive coding theory, we use a space-aware vector autoregressive (SA-VAR) model to encode the geometry and color channels of each reference patch, both with and without the distorted patch.
Specifically, assuming that the residual errors follow multivariate Gaussian distributions, we compute the self-complexity of the reference and the transformational complexity between the reference and the distorted sample from their covariance matrices. Besides the complexity terms, the prediction terms generated by SA-VAR are introduced as an auxiliary feature to promote the final quality prediction. Extensive experiments on five public point cloud quality databases demonstrate that the transformational complexity based distortion metric (TCDM) produces state-of-the-art (SOTA) results, and ablation studies further show that the metric generalizes to various scenarios with consistent performance when its key modules and parameters are varied.
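The complexity terms above can be illustrated with a minimal sketch: under the multivariate-Gaussian residual assumption, the code length of a set of residual vectors is proportional to the Gaussian differential entropy, computed from the covariance matrix. This is a simplified illustration, not the authors' SA-VAR pipeline; the synthetic residuals and the `gaussian_code_length` helper are assumptions for demonstration.

```python
import numpy as np

def gaussian_code_length(residuals):
    """Approximate code length (differential entropy, in nats) of residual
    vectors assumed to follow a zero-mean multivariate Gaussian."""
    cov = np.cov(residuals, rowvar=False)
    cov += 1e-9 * np.eye(cov.shape[0])  # small ridge for numerical stability
    d = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

rng = np.random.default_rng(0)
ref_residuals = rng.normal(scale=0.1, size=(500, 3))   # self-prediction residuals
dist_residuals = rng.normal(scale=0.5, size=(500, 3))  # cross-prediction residuals

self_complexity = gaussian_code_length(ref_residuals)
trans_complexity = gaussian_code_length(dist_residuals)
# Larger residual spread => longer code => higher transformational complexity.
print(trans_complexity > self_complexity)
```

The qualitative point is that a heavily distorted patch is harder to predict from the reference, so its residual covariance is larger and the implied code length grows.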
GPA-Net: No-Reference Point Cloud Quality Assessment with Multi-task Graph Convolutional Network
With the rapid development of 3D vision, point clouds have become increasingly popular 3D visual media content. Due to their irregular structure, point clouds pose novel challenges for related research, such as compression, transmission, rendering and quality assessment. Among these topics, point cloud quality assessment (PCQA) has attracted wide attention due to its significant role in guiding practical applications, especially in the many cases where the reference point cloud is unavailable. However, current no-reference metrics based on prevalent deep neural networks have apparent disadvantages. For example, to adapt to the irregular structure of point clouds, they require preprocessing such as voxelization and projection, which introduces extra distortions, and the applied grid-kernel networks, such as Convolutional Neural Networks, fail to extract effective distortion-related features. Besides, they rarely consider the various distortion patterns or the principle that PCQA should exhibit shift, scale and rotation invariance. In this paper, we propose a novel no-reference PCQA metric named
the Graph convolutional PCQA network (GPA-Net). To extract effective features
for PCQA, we propose a new graph convolution kernel, GPAConv, which attentively captures perturbations of structure and texture. Then, we propose a multi-task framework consisting of one main task (quality regression) and two auxiliary tasks (distortion type and degree prediction). Finally, we propose a coordinate normalization module to stabilize the results of GPAConv under shift, scale and rotation transformations. Experimental results on two independent databases show that GPA-Net achieves the best performance compared to state-of-the-art no-reference PCQA metrics, and in some cases even outperforms some full-reference metrics.
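The idea of a coordinate normalization step can be sketched generically: center the cloud, align it to its principal axes, and rescale, so that downstream features are stable under shift and scale (and rotation up to axis sign). This is an illustrative sketch and not GPA-Net's actual module; `normalize_coords` is a hypothetical helper.

```python
import numpy as np

def normalize_coords(points):
    """Normalize a point cloud so downstream features are invariant to
    shift and scale, and aligned to principal axes (rotation up to sign)."""
    centered = points - points.mean(axis=0)            # remove translation
    # PCA alignment: rotate the principal axes onto the coordinate axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ vt.T
    scale = np.abs(aligned).max()                      # remove global scale
    return aligned / scale

rng = np.random.default_rng(1)
cloud = rng.normal(size=(1000, 3))

# A shifted and scaled copy normalizes to the same coordinates (up to axis sign).
transformed = 3.0 * cloud + np.array([5.0, -2.0, 1.0])
a = normalize_coords(cloud)
b = normalize_coords(transformed)
print(np.allclose(np.abs(a), np.abs(b), atol=1e-5))
```

A design note: normalizing coordinates before feature extraction means the invariance is enforced by construction rather than learned from augmented data.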
A novel method for high accuracy sumoylation site prediction from protein sequences
Background: Protein sumoylation is an essential, dynamic, reversible post-translational modification that plays a role in dozens of cellular activities, especially the regulation of gene expression and the maintenance of genomic stability. Currently, the complexities of the sumoylation mechanism cannot be fully resolved by experimental approaches. In this regard, computational approaches represent a promising way to direct the experimental identification of sumoylation sites and shed light on the understanding of the reaction mechanism. Results: Here we present a statistical method for sumoylation site prediction. A 5-fold cross-validation test over the experimentally identified sumoylation sites yielded excellent prediction performance, with correlation coefficient, specificity, sensitivity and accuracy equal to 0.6364, 97.67%, 73.96% and 96.71%, respectively. Additionally, the predictor performance is maintained when high-level homologs are removed. Conclusion: Using this statistical method, we have developed a new SUMO site prediction method, SUMOpre, which achieves high accuracy in terms of correlation coefficient, specificity, sensitivity and accuracy.
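The reported evaluation metrics (correlation coefficient, specificity, sensitivity, accuracy) all derive from the binary confusion matrix. A minimal sketch, using illustrative counts that are not the paper's data:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Matthews correlation coefficient, specificity, sensitivity and
    accuracy from binary confusion-matrix counts."""
    sens = tp / (tp + fn)                       # true positive rate
    spec = tn / (tn + fp)                       # true negative rate
    acc = (tp + tn) / (tp + fp + tn + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return mcc, spec, sens, acc

# Hypothetical counts: 71 of 96 true sites recovered, 7 false alarms
# among 300 non-sites (invented numbers, for illustration only).
mcc, spec, sens, acc = binary_metrics(tp=71, fp=7, tn=293, fn=25)
print(round(sens, 4), round(spec, 4), round(acc, 4))
```

Reporting all four together matters for imbalanced data like this: accuracy alone can look high even when sensitivity is modest, while MCC summarizes the whole confusion matrix.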
Reconstruction Distortion of Learned Image Compression with Imperceptible Perturbations
Learned Image Compression (LIC) has recently become the trending technique
for image transmission due to its notable performance. Despite its popularity,
the robustness of LIC with respect to the quality of image reconstruction
remains under-explored. In this paper, we introduce an imperceptible attack approach designed to effectively degrade the reconstruction quality of LIC, so that the reconstructed image is severely disrupted by noise and any object in it is virtually impossible to recognize. More
specifically, we generate adversarial examples by introducing a Frobenius
norm-based loss function to maximize the discrepancy between original images
and reconstructed adversarial examples. Further, leveraging the insensitivity
of high-frequency components to human vision, we introduce Imperceptibility
Constraint (IC) to ensure that the perturbations remain inconspicuous.
Experiments conducted on the Kodak dataset using various LIC models demonstrate the effectiveness of the attack. In addition, we provide several findings and suggestions for designing future defenses.
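A rough sketch of the two ingredients described above, assuming nothing about the actual LIC models: a Frobenius-norm attack objective, and a frequency-domain projection standing in for the Imperceptibility Constraint that keeps the perturbation in high frequencies, where human vision is less sensitive. The `keep_ratio` cutoff is an invented parameter, not the paper's.

```python
import numpy as np

def frobenius_loss(x, x_rec):
    """Attack objective: the Frobenius norm of the reconstruction error,
    which the adversary seeks to maximize."""
    return np.linalg.norm(x - x_rec, ord="fro")

def high_freq_project(delta, keep_ratio=0.25):
    """Imperceptibility sketch: zero out the low-frequency part of a
    perturbation, keeping only high-frequency content."""
    f = np.fft.fftshift(np.fft.fft2(delta))
    h, w = delta.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * keep_ratio / 2), int(w * keep_ratio / 2)
    f[cy - ry:cy + ry, cx - rx:cx + rx] = 0  # remove low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

rng = np.random.default_rng(2)
img = rng.random((32, 32))
delta = 0.01 * rng.standard_normal((32, 32))
delta_hf = high_freq_project(delta)

# By Parseval, removing frequency components can only shrink the energy,
# so the projected perturbation is never larger than the original.
print(frobenius_loss(img, img + delta_hf) <= frobenius_loss(img, img + delta) + 1e-9)
```

In a full attack the perturbation would be updated iteratively against the compression model's gradients; the sketch only shows the loss and the constraint.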
Substitutional neural image compression
First author draft
Prevalence of Splanchnic Vein Thrombosis in Pancreatitis: A Systematic Review and Meta-Analysis of Observational Studies
Splanchnic vein thrombosis (SVT) may be negatively associated with the prognosis of pancreatitis. We performed a systematic review and meta-analysis of the literature to explore the prevalence of SVT in pancreatitis. All observational studies regarding the prevalence of SVT in pancreatitis were identified via the PubMed and EMBASE databases. The prevalence of SVT was pooled across all patients with pancreatitis, and also in subgroup analyses according to the stage and cause of pancreatitis, the location of SVT, and the region where the studies were performed. After review of 714 studies, 44 studies fulfilled the inclusion criteria. Meta-analyses showed a pooled prevalence of SVT of 13.6% in pancreatitis. According to the stage of pancreatitis, the pooled prevalence of SVT was 16.6% and 11.6% in patients with acute and chronic pancreatitis, respectively. According to the cause of pancreatitis, the pooled prevalence of SVT was 12.2% and 14.6% in patients with hereditary and autoimmune pancreatitis, respectively. According to the location of SVT, the pooled prevalence of portal vein, splenic vein, and mesenteric vein thrombosis was 6.2%, 11.2%, and 2.7%, respectively. The prevalence of SVT in pancreatitis was 16.9%, 11.5%, and 8.5% in Europe, America, and Asia, respectively.
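Pooling a prevalence across studies can be sketched with simple inverse-variance weighting of per-study proportions. This is a simplified fixed-effect illustration with hypothetical counts; the review's actual statistical model (e.g. a random-effects model) and data may differ.

```python
import math

def pooled_prevalence(studies):
    """Fixed-effect inverse-variance pooling of proportions.
    `studies` is a list of (events, sample_size) pairs."""
    num = den = 0.0
    for events, n in studies:
        p = (events + 0.5) / (n + 1.0)     # continuity correction avoids p = 0 or 1
        var = p * (1 - p) / n              # binomial variance of the proportion
        w = 1.0 / var                      # larger/steadier studies get more weight
        num += w * p
        den += w
    return num / den

# Hypothetical (events, N) counts per study, not the reviewed data.
studies = [(12, 90), (25, 160), (9, 75), (40, 310)]
print(round(pooled_prevalence(studies), 3))
```

The pooled estimate always lies between the smallest and largest per-study proportions, with precise studies pulling it hardest.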